Why is Available Physical Memory (dwAvailPhys) > Available Virtual Memory (dwAvailVirtual) in a call to GlobalMemoryStatus on Windows Vista x64?

The amount of virtual memory is limited by the size of the address space, which is 4 GB per process on a 32-bit system. From this you have to subtract the size of regions reserved for system use and the amount of VM already used by your process (including all the libraries mapped into its address space). On the other hand, the total amount of physical memory may be higher than the amount of virtual memory space the system has left free for your process to use (and these days it often is).

This means that if you have more than ~2 GB of RAM, you can't use all your physical memory in one process (since there's not enough virtual memory space to map it to), but it can be used by many processes. Note that this limitation is removed in a 64-bit system.
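
You can see this in practice with a small experiment. The sketch below is my illustration, not code from the thread; it assumes a 32-bit build, and the 1 MB chunk size is arbitrary. It reserves address space until VirtualAlloc fails; the total it reaches reflects the address-space limit, not the amount of free RAM:

```c
/* Sketch: reserve virtual address space until it runs out. Build as a
   32-bit binary; MEM_RESERVE consumes no physical RAM, so the loop
   stops at the address-space limit, not at the RAM limit. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const SIZE_T chunk = 1 << 20;   /* 1 MB per reservation (arbitrary) */
    SIZE_T total = 0;

    while (VirtualAlloc(NULL, chunk, MEM_RESERVE, PAGE_NOACCESS) != NULL)
        total += chunk;

    printf("Reserved ~%lu MB of address space before failure\n",
           (unsigned long)(total >> 20));
    return 0;   /* all reservations are released at process exit */
}
```

Because MEM_RESERVE claims addresses without committing physical pages, the loop stops near the 2 GB mark (or 3 GB with /3GB) on 32-bit Windows even on a machine with 4 GB of RAM.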

Ah-hah! I think I was missing something basic and you (slacker) seem to be pointing it out. Is it the case that dwAvailVirtual refers to the available memory for that process, whereas dwAvailPhys refers to the available RAM for the entire system?

Then it would make sense that often dwAvailPhys > dwAvailVirtual. The process could have used up almost all of its 2 GB (or 4 GB?), but there's still tons of physical RAM available (for other processes).

Thanks, Andrew. – Dave Mar 17 '10 at 23:19

As an aside, you say each process gets 4 GB on a 32-bit system. But then you mention 2 GB? What is the limit for each process? And what's the layman's explanation? How does 32-bit translate to 4 GB? Thanks – Dave Mar 17 '10 at 23:21

It is exactly like that.

Each process gets its own virtual memory space, while physical memory is (obviously) shared by the entire system. A 32-bit number can have 2^32 = 4G different values. If it is used as a byte-address, this means it can address 4G different bytes.

That is the whole 32-bit address space. Now a part of it is reserved for kernel use, and kernel addresses can't be used in user code. On Win32, it is by default the upper 2GB of address space - leaving lower 2GB usable by the application.

– slacker Mar 18 '10 at 18:17

Note that by using the /3GB boot option you can tell Windows to reserve only 1 GB for the kernel, leaving 3 GB to user space. This mode is not the default, as it can break some older code relying on the "classic" address space division. So you must explicitly tell the system your app can handle it (via the /LARGEADDRESSAWARE linker option) to have it turned on.

– slacker Mar 18 '10 at 18:25

Also note that when your application is started, a bunch of system libraries are mapped into its address space together with its image. This means that, from the very beginning, part of its address space is already used. You can't reliably count on getting more than ~1.5 GB free for your own use (you'll probably get more, but the exact amount varies from system to system, so don't rely on it).

– slacker Mar 18 '10 at 18:37.
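
To check slacker's ~1.5 GB estimate on a given system, you can walk the process address space with VirtualQuery and add up the free regions. A minimal sketch of my own (not from the thread):

```c
/* Sketch: sum the free regions of the current process's address space
   with VirtualQuery, to see how much is left after system DLLs load. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *p;
    unsigned long long freeBytes = 0;

    GetSystemInfo(&si);
    p = (unsigned char *)si.lpMinimumApplicationAddress;

    while (p < (unsigned char *)si.lpMaximumApplicationAddress) {
        if (VirtualQuery(p, &mbi, sizeof(mbi)) != sizeof(mbi))
            break;
        if (mbi.State == MEM_FREE)
            freeBytes += mbi.RegionSize;
        p = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }

    printf("Free virtual address space: %llu MB\n", freeBytes >> 20);
    return 0;
}
```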

That was only true in the olden days, back when RAM was expensive. The operating system maps pages of virtual memory to RAM as needed. If there isn't enough RAM to satisfy a program's request, it starts unmapping pages to make room.

If such a page contains data instead of code, it gets written to the paging file. Whenever the program accesses that page again, it generates a paging fault, letting the operating system read the page back from disk. If the machine has little RAM and lots of processes consuming virtual memory pages, that can cause a very unpleasant effect called "thrashing".

The operating system is constantly accessing the disk, and machine performance slows to a crawl. More RAM means less disk access. There's very little reason not to use 3 or 4 GB of RAM on a 32-bit operating system; it's cheap.

Even so, you may not get to use all 4 GB: not all of it will be addressable, due to hardware devices taking space on the address bus (video, mostly). But that won't change the size of the virtual memory accessible by user code; it is still 2 gigabytes. Windows Internals is a good book.

Thanks. Will buy Windows Internals. – Dave Mar 17 '10 at 20:51

On a 32-bit machine, the 2 GB VA limit is per process.

I bring this up because a common misconception is that it is across all processes. So user code can take advantage of more than 2 GB, so long as multiple processes are involved. There is also a /3GB Windows OS startup switch that makes this limit 3 gigabytes instead... – binarycoder Mar 18 '10 at 3:55.

I don't know if this is your issue, but the MSDN page for the GlobalMemoryStatus function contains the following warning: "On computers with more than 4 GB of memory, the GlobalMemoryStatus function can return incorrect information, reporting a value of –1 to indicate an overflow. For this reason, applications should use the GlobalMemoryStatusEx function instead." Additionally, that page says: "On Intel x86 computers with more than 2 GB and less than 4 GB of memory, the GlobalMemoryStatus function will always return 2 GB in the dwTotalPhys member of the MEMORYSTATUS structure.

Similarly, if the total available memory is between 2 and 4 GB, the dwAvailPhys member of the MEMORYSTATUS structure will be rounded down to 2 GB. If the executable is linked using the /LARGEADDRESSAWARE linker option, then the GlobalMemoryStatus function will return the correct amount of physical memory in both members."

Since you're referring to members like dwAvailPhys instead of ullAvailPhys, it sounds like you're using a MEMORYSTATUS structure instead of a MEMORYSTATUSEX structure.

I don't know the consequences of that on a 64-bit platform, but on a 32-bit platform that definitely could cause incorrect memory sizes to be reported.
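
For reference, the Ex variant looks roughly like this. The structure and function names are from the Windows API; the rest is a minimal sketch:

```c
/* Sketch: query memory with GlobalMemoryStatusEx, which uses 64-bit
   fields (ullAvailPhys, ullAvailVirtual) and avoids the 2 GB rounding
   that MEMORYSTATUS/GlobalMemoryStatus suffers from. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);   /* must be set before the call */

    if (!GlobalMemoryStatusEx(&ms)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Avail physical (whole system): %llu MB\n", ms.ullAvailPhys >> 20);
    printf("Avail virtual (this process):  %llu MB\n", ms.ullAvailVirtual >> 20);
    return 0;
}
```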

Thanks Daniel. I did see that on MSDN and have noted it. But I don't think it affects me.

The warning is saying that the dwAvailPhys member could be under its true value. It's truncated at 2 GB. That's fine (for now).

What I'm failing to understand is why the Physical Memory is ever more than the Virtual Memory. Again, it probably stems from my complete ignorance. It sure seems like dwAvailVirtual >= dwAvailPhys always, but this clearly is not the case.

I have exactly 4 GB of memory (the first warning is for >4 GB). I think I will try the Ex version just in case I'm getting bogus values. – Dave Mar 17 '10 at 20:45.

In computing, virtual memory is a memory management technique developed for multitasking kernels. This technique virtualizes a computer architecture's various forms of computer data storage (such as random-access memory and disk storage), allowing a program to be designed as though there is only one kind of memory, "virtual" memory, which behaves like directly addressable, contiguous read/write memory. Virtual memory makes application programming easier by hiding the fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.

Memory virtualization can be seen as a generalization of the concept of virtual memory. Virtual memory is an integral part of a modern computer architecture; implementations require hardware support, typically in the form of a memory management unit built into the CPU. While not necessary, emulators and virtual machines can employ hardware support to increase performance of their virtual memory implementations.

Consequently, older operating systems, such as those for the mainframes of the 1960s and those for personal computers of the early to mid-1980s (e.g. DOS), generally lack virtual memory functionality; the Apple Lisa is an example of a personal computer of the 1980s that features virtual memory. Most modern operating systems that support virtual memory also run each process in its own dedicated address space. Each program thus appears to have sole access to the virtual memory.

However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory. Embedded systems and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable traps that may produce unwanted "jitter" during I/O operations. This is because embedded hardware costs are often kept low by implementing all such operations with software (a technique called bit-banging) rather than with dedicated hardware.

In the 1940s and 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use. To allow for multiprogramming and multitasking, many early systems divided memory between multiple programs without virtual memory, such as early models of the PDP-10 via registers.

The concept of virtual memory was developed by the German physicist Fritz-Rudolf Güntsch at the Technische Universität Berlin in 1956. Paging was first developed at the University of Manchester as a way to extend the Atlas Computer's working memory by combining its 16 thousand words of primary core memory with an additional 96 thousand words of secondary drum memory. The first Atlas was commissioned in 1962, but working prototypes of paging had been developed by 1959.

In 1961, the Burroughs Corporation independently released the first commercial computer with virtual memory, the B5000, which used segmentation rather than paging. Before virtual memory could be implemented in mainstream operating systems, many problems had to be addressed. Dynamic address translation required expensive, difficult-to-build specialized hardware, and initial implementations slowed down access to memory slightly.

There were worries that new system-wide algorithms utilizing secondary storage would be less effective than previously used application-specific algorithms. By 1969, the debate over virtual memory for commercial computers was over; an IBM research team led by David Sayre showed that their virtual memory overlay system consistently worked better than the best manually controlled systems. The first minicomputer to introduce virtual memory was the Norwegian NORD-1; during the 1970s, other minicomputers implemented virtual memory, notably VAX models running VMS.

Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor, but its segment swapping technique scaled poorly to larger segment sizes. The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling the page fault exception to chain with other exceptions without double fault. However, loading segment descriptors was an expensive operation, causing operating system designers to rely strictly on paging rather than a combination of paging and segmentation.

Nearly all implementations of virtual memory divide a virtual address space into pages, blocks of contiguous virtual memory addresses. Pages are usually at least 4 kilobytes in size; systems with large virtual address ranges or large amounts of real memory generally use larger page sizes. Page tables are used to translate the virtual addresses seen by the application into the physical addresses used by the hardware to process instructions; the hardware that handles this translation is known as the memory management unit.

Each entry in the page table holds a flag indicating whether the corresponding page is in real memory or not. If it is in real memory, the page table entry will contain the real memory address at which the page is stored. When a reference is made to a page by the hardware, if the page table entry for the page indicates that it is not currently in real memory, the hardware raises a page fault exception, invoking the paging supervisor component of the operating system.
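
The lookup described in the last two paragraphs can be sketched as a toy flat page table. This is purely illustrative (real MMUs do it in hardware, with multi-level tables and TLBs):

```c
/* Toy illustration of page-table translation: one flat table, 4 KB
   pages. Only the logic is shown; real hardware does this per access. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                      /* 4 KB pages */
#define NUM_PAGES  1024                    /* toy 4 MB address space */

typedef struct {
    int      present;                      /* is the page in real memory? */
    uint32_t frame;                        /* physical frame number if so */
} PageTableEntry;

static PageTableEntry page_table[NUM_PAGES];

/* Returns the physical address, or -1 to signal a page fault, in which
   case the OS's paging supervisor would take over. */
static int64_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;          /* virtual page number */
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    if (vpn >= NUM_PAGES || !page_table[vpn].present)
        return -1;                                  /* page fault */

    return ((int64_t)page_table[vpn].frame << PAGE_SHIFT) | offset;
}

int main(void)
{
    page_table[3].present = 1;                      /* map page 3 -> frame 7 */
    page_table[3].frame   = 7;

    printf("0x3ABC -> 0x%llx\n", (unsigned long long)translate(0x3ABC)); /* hit */
    printf("0x5000 -> %lld\n", (long long)translate(0x5000));            /* fault */
    return 0;
}
```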

Systems can have one page table for the whole system, separate page tables for each application and segment, a tree of page tables for large segments or some combination of these. If there is only one page table, different applications running at the same time use different parts of a single range of virtual addresses. If there are multiple page or segment tables, there are multiple virtual address spaces and concurrent applications with separate page tables redirect to different real addresses.

This part of the operating system creates and manages page tables. If the hardware raises a page fault exception, the paging supervisor accesses secondary storage, reads in the page containing the virtual address that resulted in the page fault, updates the page tables to reflect the physical location of the virtual address, and tells the translation mechanism to restart the request. When all physical memory is already in use, the paging supervisor must free a page in primary storage to hold the swapped-in page.

The supervisor uses one of a variety of page replacement algorithms such as least recently used to determine which page to free. Operating systems have memory areas that are pinned (never swapped to secondary storage). For example, interrupt mechanisms rely on an array of pointers to their handlers, such as I/O completion and page fault.

If the pages containing these pointers or the code that they invoke were pageable, interrupt-handling would become far more complex and time-consuming, particularly in the case of page fault interruptions.
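
The least-recently-used policy mentioned above can be sketched with a per-frame timestamp. This is a toy illustration; real kernels use approximations such as clock/second-chance because exact LRU bookkeeping on every access is too expensive:

```c
/* Toy LRU victim selection: evict the frame with the oldest access
   time, using a logical clock bumped on every touch. */
#include <stdio.h>

#define NUM_FRAMES 4

static unsigned long last_used[NUM_FRAMES];  /* logical access times */
static unsigned long now = 0;

static void touch(int frame) { last_used[frame] = ++now; }

static int pick_victim(void)
{
    int victim = 0;
    for (int f = 1; f < NUM_FRAMES; f++)
        if (last_used[f] < last_used[victim])
            victim = f;
    return victim;
}

int main(void)
{
    touch(0); touch(1); touch(2); touch(3);
    touch(1); touch(0);                        /* frames 2 and 3 grow stale */

    printf("Evict frame %d\n", pick_victim()); /* frame 2: least recent */
    return 0;
}
```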
